27 research outputs found

    Computing generalized inverses using LU factorization of matrix product

    An algorithm for computing {2,3}-, {2,4}-, {1,2,3}-, and {1,2,4}-inverses and the Moore-Penrose inverse of a given rational matrix A is established. The classes A{2,3}_s and A{2,4}_s are characterized in terms of the matrix products (R^*A)^+R^* and T^*(AT^*)^+, where R and T are rational matrices of appropriate dimensions and corresponding rank. The proposed algorithm is based on these general representations and on the Cholesky factorization of symmetric positive definite matrices. The algorithm is implemented in the programming languages MATHEMATICA and DELPHI and illustrated with examples. Numerical results of the algorithm for the Moore-Penrose inverse are compared with the results obtained by several known methods for computing the Moore-Penrose inverse.
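    As a minimal illustration of the Cholesky-based idea, the sketch below computes the Moore-Penrose inverse of a full-column-rank matrix from the Cholesky factorization of its Gram matrix A^*A. This covers only the simplest special case; the function name and the full-column-rank restriction are assumptions here, since the paper's algorithm handles general rational matrices and the other inverse classes.

```python
import numpy as np

def pinv_full_column_rank(A):
    """Moore-Penrose inverse of a full-column-rank matrix A via the
    Cholesky factorization of the Gram matrix A^*A (a sketch of the
    simplest case only, not the paper's full algorithm)."""
    G = A.conj().T @ A                    # A^*A is Hermitian positive definite
    L = np.linalg.cholesky(G)             # G = L L^*
    # Solve G X = A^* by two triangular solves: A^+ = (A^*A)^{-1} A^*
    Y = np.linalg.solve(L, A.conj().T)
    return np.linalg.solve(L.conj().T, Y)

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
assert np.allclose(pinv_full_column_rank(A), np.linalg.pinv(A))
```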

    The British Lexicon Project: Lexical decision data for 28,730 monosyllabic and disyllabic English words

    We present a new database of lexical decision times for English words and nonwords, in which two groups of British participants each responded to 14,365 monosyllabic and disyllabic words and the same number of nonwords, for a total duration of 16 h (divided over multiple sessions). This database, called the British Lexicon Project (BLP), fills an important gap between the Dutch Lexicon Project (DLP; Keuleers, Diependaele, & Brysbaert, Frontiers in Psychology, 1, 174, 2010) and the English Lexicon Project (ELP; Balota et al., 2007), because it applies the repeated-measures design of the DLP to the English language. The high correlation between the BLP and ELP data indicates that a high percentage of the variance in lexical decision data sets is systematic variance rather than noise, and that the results of megastudies are robust with respect to the selection and presentation of the stimuli. Because of its design, the BLP makes possible the same analyses as the DLP, offering researchers an interesting new data set of word-processing times for mixed-effects analyses and mathematical modeling. The BLP data are available at http://crr.ugent.be/blp and as Electronic Supplementary Materials.
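    The variance argument can be made concrete with a small simulation (entirely hypothetical numbers, not the BLP/ELP data): when two independent studies measure the same items, the squared correlation between their item means estimates the proportion of shared, systematic variance.

```python
import numpy as np

# Hypothetical simulation of the abstract's argument: two megastudies
# measuring the same items share only the systematic item variance,
# so r^2 between them estimates the systematic proportion of variance.
rng = np.random.default_rng(42)
item_effect = rng.normal(600, 80, size=5000)          # latent item RTs (ms)
study1 = item_effect + rng.normal(0, 40, size=5000)   # signal + study-1 noise
study2 = item_effect + rng.normal(0, 40, size=5000)   # signal + study-2 noise
r = np.corrcoef(study1, study2)[0, 1]
print(f"between-study r = {r:.2f}, r^2 = {r**2:.2f} (shared systematic variance)")
```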

    Straight monotonic embedding of data sets in Euclidean spaces

    This paper presents a fast incremental algorithm for embedding data sets belonging to various topological spaces in Euclidean spaces. This is useful for networks whose input consists of non-Euclidean (possibly non-numerical) data, for the on-line computation of spatial maps in autonomous-agent navigation problems, and for building internal representations from empirical similarity data.
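    The abstract does not spell out the algorithm, so the following is only a generic sketch of incremental embedding under assumed details: each new point is placed by gradient descent so that its Euclidean distances to the already-placed points approximate the given dissimilarities.

```python
import numpy as np

def incremental_embed(D, dim=2, steps=200, lr=0.1, seed=0):
    """Generic incremental embedding sketch (assumed details, not the
    paper's exact method): insert points one at a time, positioning each
    new point so its Euclidean distances to the points already placed
    approximate the dissimilarity matrix D."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = np.zeros((n, dim))
    for i in range(1, n):
        x = rng.normal(scale=0.1, size=dim)
        for _ in range(steps):
            diff = x - X[:i]                          # offsets to placed points
            d = np.linalg.norm(diff, axis=1) + 1e-12  # current distances
            grad = ((d - D[i, :i]) / d) @ diff        # gradient of the stress
            x -= lr * grad / i                        # averaged gradient step
        X[i] = x
    return X
```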

    Two methods for encoding clusters

    This paper presents two methods for generating numerical codes that represent clusters of R^n while preserving various topological properties of the data spaces. This is useful for networks whose input, or possibly output, consists of unordered sets of points. The first method is preferable from a theoretical point of view, while the second is more practical for large clusters.
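    Neither method is specified in the abstract; the toy sketch below only illustrates the underlying requirement, namely a code for an unordered point set that is invariant to the order in which the points are listed. The statistics chosen here are an assumption for illustration, not either of the paper's two methods.

```python
import numpy as np

def encode_cluster(points):
    """Toy permutation-invariant code for an unordered cluster in R^n:
    the centroid concatenated with the sorted eigenvalues of the sample
    covariance. Relabeling the points leaves the code unchanged.
    (Illustrative only; not either of the paper's two methods.)"""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    cov = np.atleast_2d(np.cov(P, rowvar=False))
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    return np.concatenate([centroid, eigvals])

cluster = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
shuffled = [cluster[2], cluster[0], cluster[1]]
assert np.allclose(encode_cluster(cluster), encode_cluster(shuffled))
```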

    On not making dissimilarities Euclidean

    Non-metric dissimilarity measures may arise in practice, e.g., when objects represented by sensory measurements or by structural descriptions are compared. It is an open issue whether such non-metric measures should be corrected in some way to become metric or even Euclidean. The reason for such corrections is that pairwise metric distances can be interpreted in metric spaces, while Euclidean distances can be embedded into Euclidean spaces, so traditional learning methods can be used. The k-nearest-neighbor rule is usually applied to dissimilarities directly. In our earlier studies [12,13], we proposed alternative approaches to general dissimilarity representations (DRs). They rely either on an embedding into a pseudo-Euclidean space, where classifiers are then built, or on constructing classifiers on the representation directly. In this paper, we investigate ways of correcting DRs to make them more Euclidean (or metric), either by adding a proper constant or by concave transformations. Classification experiments conducted on five dissimilarity data sets indicate that non-metric dissimilarity measures can be more beneficial than their corrected Euclidean or metric counterparts. The discriminating power of the measure itself is more important than its Euclidean (or metric) properties.
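    One of the corrections mentioned, adding a proper constant, can be sketched as follows (here applied to the squared dissimilarities, a standard variant; the concave transformations and the paper's experimental setup are not reproduced). A dissimilarity matrix is Euclidean exactly when its doubly-centered Gram matrix is positive semidefinite, and the most negative eigenvalue dictates the smallest shift that repairs it.

```python
import numpy as np

def additive_constant_correction(D):
    """Sketch of one 'add a proper constant' correction: shift the
    off-diagonal squared dissimilarities just enough that the
    doubly-centered Gram matrix becomes positive semidefinite,
    i.e. that the corrected D is Euclidean."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix; PSD iff D is Euclidean
    lam_min = np.linalg.eigvalsh(B).min()
    if lam_min >= 0:
        return D                          # already Euclidean
    c = -2.0 * lam_min                    # smallest shift of D^2 that works
    return np.sqrt(D ** 2 + c * (1.0 - np.eye(n)))
```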